
Hardware Guide for Octane Render

An Octane-Focused Hardware Guide

Introduction


If you’re new to the world of hardware, the whole process of piecing together and buying a computer – especially for 3D – can be intimidating. There are several categories of components, and a wide range of pricing and performance within each component type. The various components also need to work together effectively to give you the best results.

At any point in this process, you can head over to the Hardware for Octane Render Facebook group and chat with other enthusiasts to ask questions and start narrowing down component choices.

How to use this guide

The goal of this guide is to help you through the various stages of education and make you aware of the gotchas and limitations so you can spend your money wisely and get a stable, efficient system that caters to your working style.

Section 1: How it all works. We take a brief, high-level view of everything that makes up a PC system and what’s important to Octane. After that, we talk about two key decision-making factors and how they should shape your thinking about your build. This is the perfect place to start for beginners or enthusiasts who haven’t built systems for 3D rendering.

Section 2: Component Deep-dive. As the name implies, we do a deep-dive into each component to give you a bit more information about each one and how it relates to Octane. Things get a bit more technical here, so you may have to revisit it a few times along your journey.

Section 3: Multi-GPU systems. Here we concentrate on high-end configurations and all the considerations needed if you intend to buy a professional-grade workstation for Octane.

Section 4: How to Obtain a System. Once you’re ready to buy or build, we cover the different methods of getting a PC and the pros and cons of each method.
Section 1: How it All Works

Octane is a GPU render engine, so it should come as no surprise that when building a system to run it, you’re basically laying down one or more GPUs and then setting up the infrastructure around them to make them run with as much stability and efficiency as possible.

At the core is the GPU. The important parts for Octane are the actual processor itself and the VRAM that comes with the card. These two specs determine how fast Octane runs, and how much in the way of geometry and textures you can load into the card’s memory at a time. Choosing a GPU (or multiple GPUs) depends largely on what kind of scenes you will be working on and your budget.

Once the GPU configuration is settled on, the rest of the system is built around supporting it. Public Enemy #1 in the GPU world is heat. Much of the reason why there are so many options for each GPU type is that manufacturers build custom cooling systems to try to keep the GPU running cool. If it gets too hot, it throttles (reduces speed) to prevent overheating, which kills render performance and causes instability.
 
Most of the rest of the process of building a system depends on how many GPUs you intend to have. A one or two GPU system is pretty easy, and there are lots of options out there in a wide price range. A three or four GPU system is more complex and requires a lot more thought about cooling, power, and physically fitting the system into a case. More than four GPUs requires some very specialized and expensive kit.

With the GPU configuration and cooling system in mind, the next step is to determine how it’s integrated into the rest of the PC. A typical PC is made up of a CPU, GPU, motherboard, RAM, storage drives and a power supply - all of this is wrapped up neatly in a case.

The CPU’s speed and number of cores aren’t a primary concern for just rendering in Octane (they’re important for other aspects of 3D, though). If the goal is to build a system with more than two GPUs, special attention needs to be paid to the number of PCIe lanes the CPU supports (this will be covered in more detail later).

The motherboard is what physically connects the GPU(s) to the system. The key things to look for here are how many PCIe slots it has, how fast each slot is (determined by the number of lanes, like x4, x8, etc.), and how far apart they’re spaced. Some GPUs are quite thick, and if the slots are too tightly packed together, you can’t physically fit many of them on the board without using custom extensions. In a multi-GPU system, how far apart the cards should sit from one another depends on how they’re cooled and other physical restrictions, which we’ll cover more in the multi-GPU section. The motherboard also determines the maximum amount of RAM the system can handle.

The amount of system RAM is important for being able to process and load your geometry and textures into the card’s VRAM, and it also is used as overflow for when your scene is too large to fit in VRAM. Some system RAM is also eaten up by the operating system and other background processes. Having enough RAM will keep the whole computer running well.

Now that the core components are settled on, it’s time to power this thing. This is done via a PSU (power supply unit), which is measured in wattage. Each component is rated for a peak amount of wattage, so you’ll need to add all those up to know what your total power draw is. You can use a wattage calculator to estimate how much power your system will use and buy a PSU accordingly. Total power draw can spike higher than the estimate, so it’s a good idea to add at least 20% extra on top of what the calculator says, and if you can swing it, 40-50% extra will keep the system running as stably as possible.
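
To make the headroom math concrete, here’s a minimal sketch (the wattage figures are made-up placeholders; pull real numbers from your components’ spec sheets or a wattage calculator):

```python
# Rough PSU sizing. The figures below are illustrative examples only;
# substitute the rated peak wattage of your actual components.
components = {
    "CPU": 125,
    "GPUs (2 x 320 W each)": 2 * 320,
    "Motherboard, RAM, drives, fans": 100,
}

peak_draw = sum(components.values())
print(f"Estimated peak draw: {peak_draw} W")

# Add headroom for transient spikes: 20% minimum, 40-50% for best stability.
for headroom in (0.20, 0.50):
    print(f"PSU size with {headroom:.0%} headroom: {peak_draw * (1 + headroom):.0f} W")
```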

All of these pieces need to go somewhere, so the case is the last step in the process. The case is important to keep the system running cool. If it’s air-cooled, there needs to be enough airflow around all the components to vent out the exhaust heat, and the number of intake and exhaust fans needs to be considered (often sold separately). If it’s water-cooled, it needs enough space to mount the extra gear (water pump, hoses, radiators, etc). It also might need to be quiet or look cool, depending on your personal preferences.

So now you have your checklist of pieces you’ll need to build a system specifically for Octane. Here comes the fun part... how to choose.

Key decision factors
Budget
Most of us will have a budget of some sort for a computer. Octane is very versatile - it can technically run on hardware as far back as 2012 (though it’s going to be very slow and limited), so it’s possible to buy used components and piece together a system on a shoestring budget. To get more out of it and work on more complicated scenes, more recent hardware is required, and that of course comes at a higher price. If you’re looking to buy for a studio and speed is more important than money, the sky’s the limit.

For an Octane-focused system, the bulk of the cost (50%+) should go into the GPU(s). Usually the next most expensive component after that is the CPU. The motherboard, RAM, and storage are in the next tier down, and then the PSU and case are typically the least expensive components, though they both need to be chosen well for the stability of the whole system.

What you need and what you want are often two very different things :). The trick is to find a balance and get a system that’s going to give you the most bang for your buck for the kind of work you need to do. It’s a good idea to keep a written list of the types of tasks you need to do in order to make sure you’re spending money in the right place and have allocated enough to fit your needs. It would also help to write down what you see yourself doing with the system in the next two to three years so you can plan for simple upgrades like more RAM (motherboard must support this) or another GPU (the whole system kind of needs to be planned around this as we’ll soon see).

Super Rough Ballpark Estimates
This section exists just to set expectations. Your needs will determine what’s considered low or high end for you. For someone just starting out who wants to do some 3D to supplement their After Effects workflow, the numbers below will probably be fairly accurate. For someone who intends to make cinematic shorts, doubling or tripling those numbers wouldn’t be unreasonable.

Macintosh prices are pretty well set, so you can just go to Apple’s website to see what you’re in for.

If you’re buying a brand new Windows system in November 2020, you can probably expect to pay $1,000-$1,500 (USD) for a very basic entry-level system. A low-to-mid end general purpose system might be $1,500-$3,000. A good mid to upper-mid-range rig with two to four GPUs will probably land in the $3,000-$7,000 range, and a high end workstation is going to run upwards of $10,000, especially if it’s water cooled. Again, these are very much ballpark estimates - component prices vary wildly from country to country, and you may find a killer deal or find ways to cut costs to bring the price down, or if demand is much higher than supply (like it currently is), you may actually have to pay a premium for some components.

Working Style and Type of Projects
Believe it or not, how you work has a large impact on what you’re going to need. How good you are at optimizing render settings and scene geometry, and at making smart choices about what to do in-render vs. post (or which path to take to achieve an effect), can make a world of difference in how powerful your system needs to be. The more efficient you are, the more you can do with less.
 
To a point.

No matter how efficient you are, if you’re consistently rendering out volumetric animations, super high res displacement closeups, or other things that are just savage on any render engine, you’re going to need some brute force to get you through it.

It’s a good idea to take stock of what kind of working style you have and what kinds of projects you foresee doing. As mentioned before, a GPU has a set amount of processing power, and also a set amount of VRAM. If you are a methodical worker who tends to optimize textures and scene geometry as you go, and/or if you tend to work on smaller scale scenes like product renders or quick motion graphics pieces, you might be able to get away with less VRAM, and therefore be able to pick up several lower cost cards that would process your renders faster than a single more expensive card with a lot of VRAM on it. Conversely, if you favor iteration speed and ideation over optimization (16k HDRIs? 10 million polygon 3D scans? Sure, slam it all in there!), and/or you work on large scenes with a lot of geometry, high res textures and high resolution output, you may start running into the limitations of lower end cards. At that point, you may find yourself needing to spend twice as much for half the processing power in order to be able to keep your whole scene in VRAM so it doesn’t slow down or crash.

The best cards are going to have both the most VRAM (and also be able to be linked together to share a big pool of VRAM) AND the most processing power. You can probably guess what happens to the price at that point.
Section 2: Component Deep-Dive

Now that you’re familiar with the basics, let’s do a deep dive into each component so you can really see how they’re measured and what affects what.

The GPU
Far and away, the most important component of a computer as far as Octane is concerned is the GPU.

As of right now, Octane Render on Windows supports only NVIDIA GPUs, though that covers a wide range of recent cards (~2012-present) via CUDA. On macOS, Octane supports newer AMD GPUs via Metal, and even some Intel GPUs. Octane itself supports CUDA on the Mac, but macOS cut off official NVIDIA compatibility after High Sierra, so only older NVIDIA cards in Macs running High Sierra or earlier will still work.

Processing Power
Any given GPU has a number of cores and a standard clock speed, similar to a CPU. These two things together, along with a few other factors like the architecture itself and the VRAM speed, make up the processing power. In recent NVIDIA GPUs, RT cores also help process certain calculations and add to this. All other things being equal (this is important, because other factors make a big difference to overall performance), this is how fast a GPU will render a scene.

AMD and NVIDIA use different types of cores that run at different speeds, so core counts can’t be compared directly on a spec sheet. How can you get a good sense of how GPUs perform against one another in Octane?

OTOY has created an excellent benchmark system to do this very thing. It’s called OctaneBench, and it’s pretty much the gold standard for GPU processing power comparisons. The scores are linear, so a GPU rated OB 200 (OctaneBench 200) is about twice as fast as one with a score of 100, and will therefore render the same scene in roughly half the time.
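
Because the scale is linear, back-of-envelope conversions are easy. A minimal sketch (the scores and times are illustrative, not measured results):

```python
# OctaneBench scores scale linearly with render speed, so render time
# scales inversely with score.
def estimate_render_time(baseline_minutes, baseline_score, target_score):
    """Scale a known render time from one OB score to another."""
    return baseline_minutes * baseline_score / target_score

# A scene that takes 10 minutes on an OB 100 card should take
# about 5 minutes on an OB 200 card.
print(estimate_render_time(10, 100, 200))  # -> 5.0
```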

As of this writing (November 2020), OctaneBench only works with NVIDIA GPUs, but it will eventually extend to AMD, Intel and any other architecture that runs Octane. There are some preliminary scores for AMD cards (listed below in the AMD section).

It’s also important to note that NVIDIA’s RTX technology has an impact on OB scores for certain scenes. In the official results, you can turn on and off RTX to see how the card will do on a scene where RTX doesn’t contribute.

Quick note about overclocking - yes, it will speed up the render a little. No, it’s not recommended because it comes at the cost of stability.

VRAM
VRAM is a special high-speed type of RAM integrated into the graphics card itself that determines how much in the way of textures and scene geometry can be loaded into memory for Octane to operate on at once. A scene that fits neatly into VRAM will be much faster and more stable than one that can’t. If your card supports RTX, it will actually be turned off if the scene can’t fit entirely into VRAM, slowing performance down even more.

Each GPU will come with a set amount (for example, every RTX 3080 has 10GB, every Vega 20 has 4GB, regardless of brand). The speed of the VRAM itself also helps with the performance of the card.

As mentioned before, your working style and type of projects determines how much VRAM you need, and if you’re doing really heavy scenes, you may have to sacrifice processing power and/or cost in order to have more VRAM on your GPU to work effectively. If you’re very efficient and optimize well, and/or work on smaller scenes that don’t eat up as much VRAM, you can get away with lower-cost cards that can actually render faster.

So what eats up VRAM? 
- Lots of high resolution and/or high bit depth textures or HDRIs.
- Lots of geometry (millions of polygons) - Unoptimized 3D scans are notoriously bad.
- High resolution volumes or volumes with multiple channels - these contain a lot of data that has to load on to the card.

This part is important if you are going to use more than one GPU. As of this writing, in most cases Octane does not pool (or combine) VRAM from multiple cards. It uses the VRAM in the card with the LEAST amount, and then any additional textures or geometry is loaded into system RAM (this is called Out-of-Core memory, or OoC). For example, if you were to put a 24GB card and a 4GB card in, you’d only be able to utilize 4GB of VRAM before it overflows into system RAM (unless you disabled the 4GB card in the Octane settings). Also, scenes loaded into OoC memory currently can’t use RTX acceleration.
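
Here’s that rule as a quick sketch, using the same 24GB + 4GB example (the vram_budget helper is hypothetical, purely to illustrate the math):

```python
# Without VRAM pooling, the card with the LEAST VRAM sets the ceiling;
# anything beyond it spills into system RAM as out-of-core (OoC) memory,
# which is slower and disables RTX acceleration.
def vram_budget(card_vram_gb, scene_gb):
    ceiling = min(card_vram_gb)          # smallest card sets the limit
    overflow = max(0, scene_gb - ceiling)  # spills to system RAM
    return ceiling, overflow

ceiling, overflow = vram_budget([24, 4], scene_gb=10)
print(f"Usable VRAM: {ceiling} GB, out-of-core overflow: {overflow} GB")
# -> Usable VRAM: 4 GB, out-of-core overflow: 6 GB
```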

Fortunately there are workarounds for this (you can just disable the smaller card and only use the larger one, for instance) that will be discussed more in-depth in the Multi-GPU section, but just be aware of it.

Cooling
It’s worth repeating that heat is Public Enemy #1 in the GPU world. GPUs are heat-throttled, so when they get up above a certain temperature, the performance starts dropping to cut down on the amount of heat produced so the chip doesn’t burn itself out. With proper cooling, the GPU can be kept at a reasonable temperature even when going at full speed so this doesn’t happen.

There are a few different ways a video card can be cooled. Like anything else, there are always tradeoffs. The big factors here are cost, noise and suitability for multiple GPUs. Below is a high-level example of how the most common cooling systems work.
Open air cards. These work through a combination of a metal heatsink with fins that dissipates heat, and one or more fans that keep air flowing through the heatsink to cool it down. Open-air is usually the most abundant type of cooling system. It also has the advantage of being reasonably quiet, because the several large fans don’t have to spin very fast to keep the card cool. The main disadvantage is that these cards shouldn’t be stacked up (the card sitting right underneath would block the fans and stop them from being effective), so this limits a typical system to two of this type of card (spaced as far away from each other as possible). Open air cards also blow exhaust heat into the case and rely on the case airflow to move the generated hot air out so the whole system doesn’t overheat.

Blower-style cards. These work similarly to open air cards in that they have a metal heatsink, but they have just one smaller intake fan that blows the hot air straight through the GPU heatsink and out the back of the case. These tend to be noisier under load due to the smaller fan having to spin faster. They’re usually on par cost-wise with open air cards. The big advantage is that they can be stacked right up against each other, allowing a normal-sized case to house four cards. Case airflow and ambient room temperature are extremely important here, as blower-style cards tend to run hotter and louder than most other types.

Hybrid cards. These cards come with an AIO (all-in-one) liquid cooling unit that routes water through a closed loop to cool the card down. The water carries the generated heat to a radiator off the card, where it’s cooled down by one or more fans that expel the heat out of the case. These cards are expensive and less common than air-cooled ones, and you need enough mount points in your case to attach each card’s fan/radiator assembly, but they’re very quiet and efficient, and keep the cards running very cool.

Custom Loop. These are cards whose entire cooling system has been removed and replaced with a GPU cooling block with ports that hook into a custom liquid cooling loop running through the whole case. This is an expensive way to cool a system and requires a bit of maintenance and planning, but it’s very quiet and very efficient. Four or more cards can easily be stacked in a setup like this with no performance hit. Another big advantage is that since the hardware to water-cool a card is much smaller than the hardware to air-cool it, cards that typically take up two or three slots can be shrunk to just one or two, and more can be stacked in the same physical space. This also opens up a wider range of motherboards whose slots are packed tighter together.

Case cooling. In addition to just cooling the cards, the airflow throughout the whole case must be good (especially with air-cooled cards) in order to dispel heat inside the case. This means tidying up cables and making sure there’s nothing blocking airflow.

NVIDIA Specific Info (Windows Only)
This section is primarily for Windows users, as AMD cards are not currently supported in Octane for Windows. Technically older versions of NVIDIA cards can run on older versions of macOS, but the setup can be tedious and difficult, and it’s not officially supported. If you’re looking into Octane on the Mac, it’s highly recommended that you use Octane X and choose a compatible AMD GPU instead.

GeForce vs Quadro: NVIDIA divides their GPUs into a few families, but GeForce (gaming) and Quadro (professional) are the two most common ones used for rendering. Increasingly, NVIDIA is getting away from calling their cards by these family names; instead you’ll see names like “RTX 3090” or “RTX 5000”.

RT Cores: Starting with the RTX 2000 series in late 2018, NVIDIA released GPUs that have RT cores in addition to standard ones. RT cores help with certain types of calculations and can vastly speed up scenes that use those calculations (anywhere from 1 to 30 times). At the time of this writing, RTX acceleration cannot be used unless your whole scene fits into the VRAM on the card (or cards, if they support NVLink).

NVLink and SLI: SLI is an older, deprecated technology where several GPUs can be synced together. If you have a setup like this, it’s recommended that you turn it off, since it doesn’t pool VRAM, doesn’t improve performance, and can actually cause instability.

NVLink is a newer linking technology found in some RTX 2000 series cards, as well as the RTX 3090 and most newer Quadro cards. The beauty of NVLink is that it can actually combine (pool) VRAM, which will be discussed in more detail in the multi-GPU section below.

AMD Specific Info (Mac Only)
As of this writing, Octane does not support AMD cards in Windows, so this section is primarily for Mac users. In 2020, Octane X became available for Apple’s Metal architecture for newer AMD GPUs in macOS. This is really exciting for many Mac users who hadn’t had easy access to a modern GPU rendering engine. Macs are much less customizable and varied than Windows computers, so there aren’t too many things to consider when choosing a GPU.

Supported Architectures
AMD also divides their cards into families, but the branding is a little less cut-and-dried than NVIDIA’s. The ones we’re concerned with right now are branded Radeon or Radeon Pro. Currently (as of Octane X PR4, November 2020) there are three supported architectures - Vega, Navi and Polaris. These are found in Macs made mostly after 2016.
Vega-based cards are easy to spot, since they’re usually branded as such (Vega 20, Vega 56, Vega 64, etc). Fun fact - the number after the word “Vega” refers to the number of Compute Units, so generally the higher the number, the faster the card will be in Octane.

Navi and Polaris based cards are found in Macs produced after late 2016. These unfortunately are not called something like “Navi 64”. Instead they’re called “Radeon Pro 5600M” or “Radeon RX 5700 XT”. A little digging is needed to determine which cards have Navi or Polaris architecture.

Is my Mac Supported?
Using the information above, you can determine whether a given Mac model was configurable with a built-in GPU capable of running Octane. To see if YOUR Mac contains the right card, you need to dig into the system information (Apple Menu > About This Mac). Certain Macs (the MacBook Pro, for instance) have two GPUs in them - an Intel chip and an AMD chip. When you go to “About This Mac”, it may only show the Intel chip, though. To see if the Mac has a Vega or other supported Radeon as well, hit the System Report button, scroll down to Graphics/Displays, and see if it’s listed there.
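
If you’d rather script the check than click through System Report, macOS’s built-in system_profiler command-line tool reports the same information. A minimal sketch in Python (assuming macOS; the output formatting can vary between versions):

```python
# List every GPU macOS reports, including a discrete AMD chip that
# "About This Mac" may hide behind the Intel one.
import subprocess

report = subprocess.run(
    ["system_profiler", "SPDisplaysDataType"],
    capture_output=True, text=True, check=True,
).stdout

# Each GPU appears on a "Chipset Model:" line, e.g. "Radeon Pro Vega 20".
for line in report.splitlines():
    if "Chipset Model" in line:
        print(line.strip())
```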

What if it’s not supported?
If the Mac was made in 2016 or later, odds are good it’ll have a Thunderbolt 3 port. If so, you can hook up an Octane-compatible external GPU. More on this in the eGPU section. If you have a much older Mac Pro (the 2009 “cheesegrater” tower) running an older version of macOS (High Sierra or earlier), you can try putting a 900 or 1000 series NVIDIA card in it, but that would be more of a fun project than a serious workstation.

Apple Silicon Specific Info (Mac Only)
This is a quickly evolving topic, so make sure to check back regularly. Octane X now supports the integrated GPU in Apple’s M1 processors. At the time of this writing, Macs with Apple Silicon do not support eGPUs, however.

The Other Components
As we learned before, when building a computer for the purpose of 3D rendering, the GPU is the star and all the other components are pretty much there to support it. Let’s do a deep-dive into each one. For the most part we’ll be talking about systems with 1-2 graphics cards here, and then discuss special considerations for higher-end ones with 3 or more cards in the Multi-GPU section.

CPU
Clock Speed and Cores: Most 3D DCCs and other graphics software - as of right now - benefit a lot more from higher clock speeds than from more cores. This is changing, but not very quickly. Notable exceptions are simulation software like Houdini. Octane sees a little bit of a performance boost from higher clock speeds, and they also help CPU-intensive tasks like AOV creation.

Chipset: When you’re searching for CPUs, you’ll see terms like x299 or z490 (Intel) or B550 or X570 (AMD). This refers to the chipset, which determines which CPU is compatible with which motherboard. Certain chipsets also support more PCIe lanes and more RAM than others. You’ll start to understand which chipset you’re aiming for as you set your budget and start narrowing down your component list.

PCIe Lanes: This is probably one of the most challenging things to understand. If you’re building a system with one or two GPUs in it, you don’t have to worry about it. It becomes more of an issue when there are more than two cards.

Essentially, this is the amount of bandwidth the CPU has to talk to all the PCIe devices at once. GPUs are PCIe devices, so this becomes important to us. Each CPU has a set number of PCIe lanes, and those lanes interface with the cards through the PCIe slots in the motherboard. Current-generation (PCIe Gen3) graphics cards want 8 lanes in order to operate at peak efficiency. Most low to mid-range CPUs have 16 PCIe lanes, which is enough to keep two graphics cards happy. This is why if you don’t intend to have more than two cards, most CPUs and motherboards are fair game right now.
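
If you want to sanity-check a particular build, the lane math is simple enough to script. A minimal sketch (the lane counts and the max_full_speed_gpus helper are illustrative, not from any official tool; look up your exact CPU’s lane count):

```python
# PCIe Gen3 GPUs want 8 lanes each for peak efficiency. Remember that
# NVMe drives and other devices also consume lanes.
LANES_PER_GPU = 8

def max_full_speed_gpus(cpu_lanes, lanes_for_other_devices=0):
    usable = cpu_lanes - lanes_for_other_devices
    return usable // LANES_PER_GPU

print(max_full_speed_gpus(16))     # typical mainstream CPU -> 2 GPUs
print(max_full_speed_gpus(48, 8))  # HEDT CPU, 8 lanes for NVMe -> 5 GPUs
```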

PCIe Generation: As of November 2020, most components use PCIe Generation 3. With the introduction of Generation 4, the bandwidth of each lane has been doubled.

System RAM
As mentioned before, once the GPU with the lowest amount of VRAM fills up, Octane can start using system RAM. Depending on what else you have open, this can run out fairly quickly too, so if your card has a low amount of VRAM (say, under 8GB) and you tend to make larger scenes with millions of polygons and/or lots of large textures, you’ll need enough RAM available for overflow.

As of this writing, 32GB is usually the bare minimum recommended, with 64GB being pretty comfortable for most single GPU systems. A good rule of thumb is that you should have three to four times more RAM than GPU memory for the system to operate efficiently. The reason for the range is that not every scene requires the same amount of RAM usage, and you also want some extra RAM for other processes you’re running on your system.
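
Here’s that rule of thumb as a quick calculation (the suggested_ram_gb helper is just for illustration; it’s a guideline, not a hard requirement):

```python
# System RAM should be roughly 3-4x the VRAM you intend to fill.
def suggested_ram_gb(vram_gb):
    return 3 * vram_gb, 4 * vram_gb

low, high = suggested_ram_gb(24)  # e.g. a single RTX 3090
print(f"Suggested system RAM: {low}-{high} GB")  # -> 72-96 GB
```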

Always be sure to check the motherboard you intend to buy and make sure it supports the amount of RAM you think you’ll need based on the GPUs you’re planning on getting now or in the near future.

Motherboard
First and foremost, the motherboard needs to have the right chipset/socket for the CPU you want to use, otherwise the system isn’t going to work at all. Fortunately, most builder sites or a tool like PCPartPicker will let you know if the parts you chose don’t work together.
 

It’s also a good idea to look at whether it supports the amount of RAM you want (most will do 64GB, some will do 128GB or higher).
 

If you’re building a multi-GPU system, it’s also important to know how many PCIe slots there are, how many lanes each slot supports, and also physically how far apart they are from one another. We’ll cover slot spacing in more detail in the Multi-GPU section.

Storage
For a standard motion graphics system, the recommendation is usually to have a fast-ish boot drive (SSD), a fast cache drive (NVMe) that’s useful for simulations and post-production, and one or more large storage drives for your files. Octane itself isn’t picky about drive speeds, so if your build is meant to be a dedicated rendering workstation, put the money into the GPUs instead.

Power Supply (PSU)
As mentioned way up at the top, the PSU needs to supply enough power to run all the components you choose. It’ll be one of the last components you research or buy. Check out a Wattage Calculator like Seasonic’s that allows you to configure your system and see the estimated power draw. Then double it if you can, or at minimum leave 20% extra overhead for the sake of stability.

Case
This will likely be the last thing on your list. Cooling is covered in the GPU section above, so just make sure the case is large enough to accommodate all your components and has good airflow (or can take a liquid cooling setup if you go down that path). Beyond that, it’s all about style, noise reduction and form factor. If you go for a quiet case, be sure it doesn’t stifle airflow.
Section 3: Multi-GPU Setups
Once you get above two GPUs, the complexity, heat, performance and price all increase with each additional card. There are several ways to configure a computer with several cards, and each has its advantages and disadvantages. Here are the special considerations you need to know about if you’re planning on three or more GPUs (or two high-end GPUs like Quadros or RTX 3090s).


Performance
In a Windows-based computer, any combination of NVIDIA cards made after 2012 or so will all be seen by Octane and used to render. On a Mac, any combination of AMD GPUs will work together. You can turn on and off individual cards at any time, and we’ll see why you’d want to do that in the next section.

The OctaneBench scores of all the GPUs are added up to give the final system score. This is the easiest part of the equation. More cards = more rendering speed. Surely there’s more to it though, right?

Indeed there is.

VRAM
As mentioned before, when you have multiple GPUs, Octane can only access the VRAM in the card with the LEAST amount.

In a setup where you have two different cards (in this case, let’s say you have a 4GB card and a 24GB card), there are four options:

1. You can turn off the 4GB card in the Octane settings and gain the 20GB of VRAM back, but then you lose the processing power of that second card (which may not make that big a difference). 

2. You could leave both cards active and take extra care to optimize your scene so you’re not going over 4GB of geometry and textures.

3. You could leave both cards active and let the VRAM overflow into system RAM if you have enough and don’t mind the speed hit.

4. You can remove the 4GB card and replace it with another 24GB card if your system (and wallet) allows.

PCIe Lanes
A Multi-GPU setup is where you need to pay special attention to the number of PCIe lanes in the CPU and motherboard (chipset). As we mentioned earlier, for current-generation (PCIe Gen 3), you want 8 PCIe lanes for each card to make them run smoothly.

CPU: Most higher-end consumer Intel and AMD CPUs have enough lanes for up to five GPUs. You will still need to look into it (just Google “Intel i9-10900x PCIe Lanes” and it should tell you), but you’ll have a pretty wide range of options.

Past four GPUs (more than 40 lanes), the options start to get limited. Some AMD Threadrippers have a ton of lanes, so those may be a viable option. Server chips like Intel Xeons and AMD Epycs tend to have a lot more lanes (sometimes even 64 or 128), and there are boards that support multiple CPUs so you can double the number of available lanes (and the price). The tradeoff is that these types of processors usually have lower single-core clock speeds, and single-core speed is what matters most for 3D DCCs at the moment, as well as other post-production graphics software.

Motherboard: In addition to the number of physical slots to accommodate multiple graphics cards, you have to make sure that each slot can handle 8 lanes (marked as x8 or x16). This is often marked in the documentation. There are usually a few choices with certain chipsets that can handle four x8 slots, and even fewer that can handle more.
There are also motherboards with a PLX chip that better distributes the lanes across the various PCIe slots by switching lanes between them. This effectively doubles the number of lanes your system sees and allows for more cards on boards that wouldn’t normally support them.

PCIe Slot Layout
The slot layout on the motherboard is another important factor to consider with a multi-GPU setup. As mentioned above, the slots need to have enough bandwidth, but they also need to be spaced appropriately to physically accommodate the number of cards you’re going to have. Each card and motherboard has a different design, so it’s best to check with others who have tried to pair them to make sure it will work. There are workarounds (risers, extension cables, etc), but then you’re limited by the case and how the cards physically fit within it. There are also considerations with the NVLink system as far as how far apart the cards need to be spaced.
NVLink
NVLink is a high-speed interconnect that works between two supported NVIDIA cards. For this to work, you need a NVLink bridge, and two supported cards that are spaced appropriately apart on the motherboard. By combining cards in this fashion, if one runs out of memory, the VRAM on the other will act as fast out-of-core memory that will not cause RTX to be disabled.
If you intend to go this route, there are a few considerations you need to keep in mind:

Slot Spacing: It’s essential that you research the correct spacing for the cards you intend to use and get the right bridge. Bridges for Quadro cards have 2-3 slot spacing because the reference models (NVIDIA-branded) use blower style cooling and can sit right up against one another. Bridges for GeForce cards have 3-4 slot spacing to accommodate the open air cooling designs for GeForce reference models.

System RAM: You will need double the amount of system RAM for two cards as you would for one in order to fill both cards fully. For example, if you have one RTX 3090 with 24GB of VRAM, you’ll already want three to four times that amount of system RAM in order to fully fill it (this varies depending on the scene and other RAM usage in your system). That means a minimum of 72GB of system RAM to keep everything happy. If you’re running two 3090s connected via NVLink, you’ll want a minimum of 144GB of RAM, which is more than many consumer-end motherboards support.
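
A quick worked version of that math, assuming the 3x minimum rule of thumb from above (actual needs vary by scene and by what else the system is running):

```python
# NVLink pools VRAM across the two bridged cards, and the 3x system RAM
# rule of thumb then applies to the pooled total.
vram_per_card = 24          # GB, e.g. an RTX 3090
pooled = 2 * vram_per_card  # two cards bridged with NVLink
print(f"Pooled VRAM: {pooled} GB, minimum suggested RAM: {3 * pooled} GB")
# -> Pooled VRAM: 48 GB, minimum suggested RAM: 144 GB
```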

As of this writing, several Quadro cards, as well as the RTX 2070 Super, RTX 2080 Super, RTX 2080, RTX 2080 Ti, Titan RTX, and RTX 3090, support NVLink.

Cooling
The slot configuration also affects the cooling type of the cards you can have. Blower-style and liquid cooled cards are designed to work without a gap between them, so if you have a motherboard with four slots spaced two slot widths apart, you can put four of those cards in without worrying too much about heat.

If you’re planning on open-air cards, the airflow would be obstructed in this case and the cards would throttle. You need at least one slot gap between cards of this type, and two or more is preferable. Special considerations also have to be taken for triple-width cards like the RTX 3090, and if you intend to use NVLink (see above).
Power Supply (PSU)
For high-end systems with more than four (sometimes even more than three) cards, a normal consumer-grade power supply won’t cut it. Consumer PSUs tend to top out at 1600 watts, which is about the maximum a standard 15-amp circuit in a typical US household can safely deliver. Newer cards like the RTX 3090 draw 350 watts each, so 350 x 4 = 1400 watts just for the GPUs. The other components will often draw a few hundred more, so you’d already be running at 90-100% capacity (or more), which causes serious instability in the system.

In this case you may need to do a multi-PSU setup, or find a server-grade PSU that’s capable of more wattage (and find a higher amp circuit to plug the system into).

eGPUs
Any Windows PC or Mac with Thunderbolt 3 can use an eGPU enclosure to run a graphics card externally. Thunderbolt 3 isn’t as fast as an internal PCIe slot (it caps out at 4 PCIe lanes), so the card won’t run at max efficiency, but for something like a laptop or an iMac, it’s still going to yield considerable performance gains that you couldn’t otherwise get. A 2018 MacBook Pro with a Vega 20 (~50 OB) will still see a nearly 4x gain in speed when paired with a Radeon 5700 XT in an eGPU enclosure.

Depending on the system, it’s technically possible to run multiple GPUs this way. Typically the number of Thunderbolt 3 buses is limited to two, and one GPU will eat up as much of a single bus’s bandwidth as it can, so keeping one enclosure on each bus is pretty crucial.

Nearly any standard form factor graphics card will work in an external enclosure, but the enclosure needs to have a large enough power supply to run the card and be physically large enough to accommodate it. Recent NVIDIA cards are getting larger and larger, so always check the enclosure manufacturer’s site to figure out whether the card will work.

Mac-specific considerations: Because of the lack of NVIDIA support on macOS, an NVIDIA card in an eGPU enclosure can’t be used on a Mac running a newer version of macOS. There are still older configurations that might work depending on your Mac (GTX1080 Ti on a Mac running High Sierra), but this type of setup can be difficult to get working and may exhibit problems that a supported card like the AMD 5700 XT wouldn’t have.

As with internal GPUs, current Macs can only use AMD cards in eGPUs featuring Vega, Navi or Polaris architecture.

As of right now, Macs with Apple Silicon processors (M1 chip) do NOT support eGPUs.
Section 4: Obtaining a System

If you’ve made it this far, you can consider yourself pretty knowledgeable about what’s important in a system. So now it’s time to get one. As with everything else, there are a few options.

Off-the-shelf (Prebuilt)
You walk into a store and walk back out with a pre-made tower, plug it in and go.

The obvious advantage is that you can have something up and running today. These computers also have a warranty, so if they break, you can bring them back to the store. Prices are usually pretty good on these systems because they’re operating on razor-thin margins to compete with other manufacturers. Sometimes they go on sale as well.

The biggest disadvantage is that you’re stuck with the parts the builder uses, and nearly every pre-built out there is tuned for either gaming or typical business use - not rendering. The quality of the parts will often be sacrificed to keep costs down, and upgradeability will likely be an issue in the future.

Limited Configuration
Usually you’ll find this type of system on a manufacturer’s site (Apple, Dell, HP, etc). In this case most of the system is pre-built, but you get to pick some upgrades. If you’re after a Windows-based laptop or a Mac, this and off-the-shelf are pretty much your only options.

The main advantage is that these companies buy their components in bulk and do a good deal of testing, so you can typically get them more quickly than a full custom build and you can be sure the components will work together (PSU is large enough, etc). They also usually have a warranty and can be sent back if something goes wrong within that period.

The disadvantages are similar to an off-the-shelf build - you’re limited as to which components are used and companies will sometimes cut down on quality or expandability to keep costs lower. They are also a bit more expensive than off-the-shelf configurations and don’t go on sale as much.

Custom Configuration via a Builder
This method involves finding a trustworthy and reliable system builder, and then either using an extensive configurator on their site or working with a salesperson there to build a system. The builder (usually) tests and burns in the system and then sends it off to you.

The biggest advantage is (if done right), you can end up with a system that’s tuned for Octane and will work better than nearly all off-the-shelf or limited config systems. A good builder will do extensive testing on your build beforehand, and also provide a system-wide warranty and troubleshoot and fix the computer if anything goes wrong. Often, you can also get limited-availability parts faster through a builder than if you were to try to source them yourself.

The main disadvantages are time and cost. You’ll want to utilize this guide more than you would for the first two methods to make sure you know what components and features you’re after, so there’s a pretty large research project involved before you even go on the site. You also want to spend some time researching the builder themselves - reading reviews and asking around. Getting all the parts and actually building the system takes time, and testing and burn-in adds to it. This way is almost always going to be the most expensive way - you’re paying extra for the builder’s time and expertise.

DIY
This is either the most fun & satisfying, or the most infuriating, way to get a system, depending on your personality type, confidence working with your hands, and a bit of luck. This involves a lot of time spent researching each component to know exactly what will work with what, and then hunting through various sites to find each component at the best price possible. Depending on the amount of time and patience you have, you’ll either need to wait for each part to come available, or have plans B, C and D in place to get alternate components that will still work for you. Once all the pieces arrive, you build the system yourself (easier than it used to be, but still intimidating for some), and then test it, install the OS, and do all the fine-tuning necessary.

The obvious advantage here is that you’re getting exactly what you want, and you gain a much more intimate knowledge of your system and hardware in general. This can also be the cheapest way to get a good system if you are able to find good prices on components. It can be a fun and rewarding project for a patient and tech-minded person, plus you get bragging rights on Reddit.

There are disadvantages as well. Time is a big one - all that research, hunting for parts and waiting for things to ship isn’t fast. There’s also reliability - nearly all components come with their own warranty, but pinpointing exactly which component is the problem can be time-consuming and frustrating if something goes wrong. If you’re careful and watch a few guides, it’s unlikely that you’ll break something, but there’s always the possibility of spilling a cup of coffee on the motherboard or breaking off a USB header pin while installing a GPU into the bottom slot, and then you have to live with the consequences.

If you’re going to do it this way and are building an air-cooled system, be sure to look into how to properly route cables so you don’t impede the airflow through the case.

There are tools out there like PCPartPicker that will help you keep track of your build and test compatibility between components. You should also check a wattage calculator like Seasonic’s to make sure you have enough power for your whole build (remember to leave at least 20% on top of what it recommends, or 50% if you can, for stability).
Conclusion
As with most specialized fields, there’s a lot of information here. Take your time, go through it all slowly, and ask questions, and you’ll be able to get a computer tuned for Octane Render that caters well to your working style and skill level.